    Robust Multi-Sensor Fusion: A Decision-Theoretic Approach

    Many tasks in active perception require that we be able to combine information from a variety of sensors that relates to one or more features of the environment. Prior to combining these data, we must test our observations for consistency. The purpose of this paper is to examine sensor fusion problems for linear location data models using statistical decision theory (SDT). The contribution of this paper is the application of SDT to obtain: (i) a robust test of the hypothesis that data from different sensors are consistent; and (ii) a robust procedure for combining the data that pass this preliminary consistency test. Here, robustness refers to the statistical effectiveness of the decision rules when the probability distributions of the observation noise and the a priori position information associated with the individual sensors are uncertain. The standard linear location data model refers to observations of the form Z = θ + V, where V represents additive sensor noise and θ denotes the sensed parameter of interest to the observer. While the theory addressed in this paper applies to many uncertainty classes, the primary focus is on asymmetric and/or multimodal models that allow one to account for very general deviations from nominal sampling distributions. This paper extends earlier results in SDT and multi-sensor fusion obtained by [Zeytinoglu and Mintz, 1984], [Zeytinoglu and Mintz, 1988], and [McKendall and Mintz, 1988].
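    The two-stage structure described above (a consistency test followed by robust combination) can be sketched as follows. This is a hypothetical illustration, not the decision rules derived in the paper: the median-based screen, the 3-sigma threshold, and the function names are illustrative assumptions.

```python
# Hypothetical sketch of two-stage fusion for the model Z = theta + V:
# first screen the observations for consistency, then combine the
# survivors with a robust location estimate.

def consistent(z, scale, k=3.0):
    """Keep observations whose deviation from the sample median
    is within k noise scales (a simple consistency screen)."""
    m = sorted(z)[len(z) // 2]
    return [zi for zi in z if abs(zi - m) <= k * scale]

def fuse(z, scale, k=3.0):
    """Combine the surviving observations with their median,
    a robust estimate of the location parameter theta."""
    kept = sorted(consistent(z, scale, k))
    n = len(kept)
    mid = n // 2
    return kept[mid] if n % 2 else 0.5 * (kept[mid - 1] + kept[mid])

# Example: four sensors, one inconsistent outlier.
print(fuse([1.0, 1.2, 0.9, 9.0], scale=0.5))  # the outlier 9.0 is screened out
```

    The screen-then-combine decomposition mirrors contributions (i) and (ii) in the abstract: a sensor whose reading fails the consistency test never influences the fused estimate.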

    Sensor-Fusion With Statistical Decision Theory: A Prospectus of Research in the GRASP Lab

    The purpose of this report is to describe research in sensor fusion with statistical decision theory in the GRASP Lab, Department of Computer and Information Science, University of Pennsylvania. This report is thus a tutorial overview of the general research problem, the mathematical framework for the analysis, the results of specific research problems, and directions of future research. The intended audience for this report includes readers seeking a self-contained summary of the research as well as students considering study in this area. The prerequisite for understanding this report is familiarity with basic mathematical statistics

    Feature-Based Localization Using Fixed Ultrasonic Transducers

    We describe an approach for mobile robot localization based on geometric features extracted from ultrasonic data. As is well known, a single sonar measurement using a standard Polaroid™ sensor, though yielding relatively accurate information regarding the range of a reflective surface patch, provides scant information about the location in azimuth or elevation of that patch. This lack of sufficiently precise localization of the reflective patch hampers any attempt at data association, clustering of multiple measurements, or subsequent classification and inference. In previous work [15, 16] we proposed a multi-stage approach to clustering which aggregates sonic data accumulated from arbitrary transducer locations in a sequential fashion. It is computationally tractable and efficient despite the inherent exponential nature of clustering, and is robust in the face of noise in the measurements. It therefore lends itself to applications where the transducers are fixed relative to the mobile platform, where remaining stationary during a scan is impractical or infeasible, and where dead-reckoning errors can be substantial. In the current work we apply this feature extraction algorithm to the problem of localization in a partially known environment. Feature-based localization boasts advantages in robustness and speed over several other approaches. We limit the set of extracted features to planar surfaces. We describe an approach for establishing correspondences between extracted and map features. Once such correspondences have been established, a least squares approach to mobile robot pose estimation is delineated. It is shown that once correspondence has been found, the pose estimation may be performed in time linear in the number of extracted features. The decoupling of the correspondence matching and estimation stages is shown to offer advantages in speed and precision. Since the clustering algorithm aggregates sonic data accumulated from arbitrary transducer locations, there are no constraints on the trajectory to be followed for localization except that sufficiently large portions of features be ensonified to allow clustering. Preliminary experiments indicate the usefulness of the approach, especially for accurate estimation of orientation.
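    The decoupled, linear-time estimation step can be sketched in a simplified 2D setting. This is an assumption-laden illustration, not the paper's implementation: each matched planar feature is reduced to a map line with unit normal n and offset d (n·p = d) paired with the corresponding extracted line (m, e) in the robot frame, and the parameterization and function names are hypothetical.

```python
import math

# Orientation first, then translation by linear least squares; both passes
# are a single loop over the matches, i.e. linear in the feature count.

def estimate_pose(matches):
    # 1. Orientation: average the rotation from each extracted normal m to
    #    its map normal n (averaged on the circle via sin/cos sums).
    s = c = 0.0
    for (n, d), (m, e) in matches:
        a = math.atan2(n[1], n[0]) - math.atan2(m[1], m[0])
        s += math.sin(a)
        c += math.cos(a)
    theta = math.atan2(s, c)

    # 2. Translation: a world line n.p = d appears at offset e = d - n.t in
    #    the robot frame, so each match gives one linear constraint
    #    n.t = d - e. Accumulate the 2x2 normal equations and solve.
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (n, d), (m, e) in matches:
        r = d - e
        a11 += n[0] * n[0]; a12 += n[0] * n[1]; a22 += n[1] * n[1]
        b1 += n[0] * r;     b2 += n[1] * r
    det = a11 * a22 - a12 * a12
    tx = (a22 * b1 - a12 * b2) / det
    ty = (a11 * b2 - a12 * b1) / det
    return theta, (tx, ty)

# Two walls seen by a robot at t = (1, 2) with zero rotation:
walls = [(((1, 0), 3), ((1, 0), 2)),   # wall x = 3, seen at offset 3 - 1 = 2
         (((0, 1), 5), ((0, 1), 3))]   # wall y = 5, seen at offset 5 - 2 = 3
print(estimate_pose(walls))
```

    At least two non-parallel features are needed for the 2x2 system to be invertible, which corresponds to the requirement that sufficiently large portions of distinct features be ensonified.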

    Computational Methods for Task-Directed Sensor Data Fusion and Sensor Planning

    In this paper, we consider the problem of task-directed information gathering. We first develop a decision-theoretic model of task-directed sensing in which sensors are modeled as noise-contaminated, uncertain measurement systems and sensing tasks are modeled by a transformation describing the type of information required by the task, a utility function describing sensitivity to error, and a cost function describing time or resource constraints on the system. This description allows us to develop a standard conditional Bayes decision-making model where the value of information, or payoff, of an estimate is defined as the average utility (the expected value of some function of decision or estimation error) relative to the current probability distribution and the best estimate is that which maximizes payoff. The optimal sensor viewing strategy is that which maximizes the net payoff (decision value minus observation costs) of the final estimate. The advantage of this solution is generality: it does not assume a particular sensing modality or sensing task. However, solutions to this updating problem do not exist in closed form. This motivates the development of an approximation to the optimal solution based on a grid-based implementation of Bayes' theorem. We describe this algorithm, analyze its error properties, and indicate how it can be made robust to errors in the description of sensors and discrepancies between geometric models and sensed objects. We also present the results of this fusion technique applied to several different information gathering tasks in simulated situations and in a distributed sensing system we have constructed.
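    The grid-based implementation of Bayes' theorem mentioned above can be sketched minimally as follows. The Gaussian sensor model and the 1D grid are illustrative stand-ins, not the paper's algorithm: the parameter space is discretized into cells, and each noisy observation reweights the cell probabilities by its likelihood.

```python
import math

def grid_update(grid, prior, z, sigma):
    """One Bayes update: posterior(cell) is proportional to
    prior(cell) * p(z | cell), renormalized over the grid."""
    post = [p * math.exp(-0.5 * ((z - x) / sigma) ** 2)
            for x, p in zip(grid, prior)]
    total = sum(post)
    return [p / total for p in post]

# Uniform prior over a 1D grid, then two observations of the same parameter.
grid = [i * 0.1 for i in range(101)]          # cells covering [0, 10]
belief = [1.0 / len(grid)] * len(grid)        # flat prior
for z in (4.1, 3.9):
    belief = grid_update(grid, belief, z, sigma=0.5)
best = grid[max(range(len(grid)), key=lambda i: belief[i])]
print(best)  # the highest-probability cell concentrates near 4.0
```

    Because the update is a pointwise multiply-and-normalize over the grid, it places no closed-form requirements on the prior or the sensor model, which is what makes it a natural approximation when closed-form posteriors do not exist.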

    Non-Monotonic Decision Rules for Sensor Fusion

    This article describes non-monotonic estimators of a location parameter θ from a noisy measurement Z = θ + V when the possible values of θ have the form {0, ±1, ±2, . . . , ±n}. If the noise V is Cauchy, then the estimator is a non-monotonic step function. The shape of this rule reflects the non-monotonic shape of the likelihood ratio of a Cauchy random variable. If the noise V is Gaussian with one of two possible scales, then the estimator is also a non-monotonic step function. The shape of this rule reflects the non-monotonic shape of the likelihood ratio of the marginal distribution of Z given θ under a least-favorable prior distribution.
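    The non-monotone likelihood ratio behind these rules is easy to see numerically. The sketch below is an illustration of the phenomenon, not the article's estimator: for a standard Cauchy density f, the ratio f(z - 1)/f(z) for testing θ = 1 against θ = 0 rises with z but then decays back toward 1 in the tails, unlike the Gaussian case, where the same ratio is monotonically increasing.

```python
import math

def cauchy(x):
    """Standard Cauchy density."""
    return 1.0 / (math.pi * (1.0 + x * x))

def lr(z):
    """Likelihood ratio for theta = 1 versus theta = 0 at observation z."""
    return cauchy(z - 1.0) / cauchy(z)

# The ratio peaks at a moderate z and decays toward 1 for large |z|, so a
# threshold test on lr(z) accepts theta = 1 only on a bounded interval:
# the resulting decision regions, and hence the estimator, are non-monotonic.
print(lr(0.0), lr(2.0), lr(100.0))
```

    Intuitively, a very large observation is so implausible under either hypothesis that it carries almost no evidence for the larger parameter value, which is why the step-function estimator can step back down for extreme measurements.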

    Multivariate Data Fusion Based on Fixed-Geometry Confidence Sets

    The successful design and operation of autonomous or partially autonomous vehicles which are capable of traversing uncertain terrains requires the application of multiple sensors for tasks such as: local navigation, terrain evaluation, and feature recognition. In applications which include a teleoperation mode, there remains a serious need for local data reduction and decision-making to avoid the costly or impractical transmission of vast quantities of sensory data to a remote operator. There are several reasons to include multi-sensor fusion in a system design: (i) it allows the designer to combine intrinsically dissimilar data from several sensors to infer some property or properties of the environment, which no single sensor could otherwise obtain; and (ii) it allows the system designer to build a robust system by using partially redundant sources of noisy or otherwise uncertain information. At present, the epistemology of multi-sensor fusion is incomplete. Basic research topics include the following task-related issues: (i) the value of a sensor suite; (ii) the layout, positioning, and control of sensors (as agents); (iii) the marginal value of sensor information; (iv) the value of sensing-time versus some measure of error reduction, e.g., statistical efficiency; (v) the role of sensor models, as well as a priori models of the environment; and (vi) the calculus or calculi by which consistent sensor data are determined and combined.
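    The fixed-geometry idea can be sketched in one dimension. This is a hypothetical illustration of the general approach, not the paper's multivariate construction: each sensor reports a confidence interval of fixed, precomputed half-width centered on its reading; readings are declared consistent when the intervals share a common point, and consistent data are fused by intersecting their intervals.

```python
def interval(z, halfwidth):
    """Fixed-geometry confidence set: an interval of preset half-width
    centered on the sensor reading z."""
    return (z - halfwidth, z + halfwidth)

def fuse_intervals(intervals):
    """Intersect the confidence intervals; None signals inconsistency."""
    lo = max(a for a, _ in intervals)
    hi = min(b for _, b in intervals)
    return (lo, hi) if lo <= hi else None

sets = [interval(1.0, 0.5), interval(1.3, 0.5)]
print(fuse_intervals(sets))                         # consistent pair fuses
print(fuse_intervals(sets + [interval(9.0, 0.5)]))  # outlier: inconsistent
```

    Because the set geometry is fixed in advance, the consistency test and the fused set require only comparisons, which suits the local, low-bandwidth data reduction motivated in the abstract.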

    Adaptive Image Segmentation

    This paper introduces a general purpose scene segmentation system based on the model that the gradient value at region borders exceeds the gradient within regions. All internal and external parameters are identified and discussed, and the methods of selecting their values are specified. User-provided external parameters are based on segmentation scale: the approximate number of regions (within 50%) and the typical perimeter:area ratio of objects of interest. Internal variables are assigned values adaptively, based on image data and the external parameters. The algorithm for region formation combines detected edges and a classical region growing procedure, and is shown to perform better than either method alone. A confidence measure in the result is provided automatically, based on the match of the actual segmentation to the original model. Using this measure, the system confirms whether or not the model and the external parameters are appropriate to the image data. This system is tested on many domains, including aerial photographs, small objects on plain and textured backgrounds, CT scans, stained brain tissue sections, pure white noise, and laser range images. The system is intended to be applied as one module in a larger vision system. The confidence measure provides a means to integrate the result of this segmentation and segmentations based on other modules. This system is also internally modular, so that another segmentation algorithm or another region formation algorithm could be included without redesigning the entire system.
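    The underlying model can be illustrated with a deliberately simplified region grower. This is a sketch of the border/interior gradient assumption only, not the paper's system: here the gradient threshold is fixed by hand, whereas in the paper the internal parameters are assigned adaptively from the image data and the user-supplied scale parameters.

```python
def grow_region(img, seed, grad_thresh):
    """Grow a region from a seed pixel, stopping wherever the local
    gradient magnitude exceeds the threshold (a detected border)."""
    h, w = len(img), len(img[0])
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region:
            continue
        region.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                # The intensity jump across this step; a large jump
                # marks a region border and halts growth.
                if abs(img[nr][nc] - img[r][c]) <= grad_thresh:
                    stack.append((nr, nc))
    return region

img = [[10, 11, 50, 52],
       [10, 12, 51, 50],
       [11, 10, 50, 51]]
left = grow_region(img, (0, 0), grad_thresh=5)
print(sorted(left))  # the six low-intensity pixels on the left
```

    Combining such a grower with detected edges, as the paper does, lets strong edges veto growth even where the pixel-to-pixel gradient happens to be small.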

    Cooperative Control for Localization of Mobile Sensor Networks

    In this paper, we consider the problem of cooperatively controlling a formation of networked mobile robots/vehicles to optimize the relative and absolute localization performance in 1D and 2D space. A framework for active perception is presented utilizing a graphical representation of sensory information obtained from the robot network. Performance measures are proposed that capture the quality of the team localization estimates. We show that these measures directly depend on the sensing graph and the shape of the formation. This dependence motivates the implementation of a gradient-based control scheme to adapt the formation geometry in order to optimize team localization performance. This approach is illustrated through application to a cooperative target localization problem involving a small robot team. Simulation results are presented using experimentally validated noise models.
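    The gradient-scheme idea can be sketched for a range-only target localization task. The measure and models here are illustrative assumptions, not the paper's: the Fisher information about a 2D target from unit bearing vectors u_i is the sum of the outer products u_i u_iᵀ, so the error measure trace(FIM⁻¹) depends only on the formation geometry, and nudging each sensor down the numerical gradient of this measure adapts the formation.

```python
import math

def error_measure(angles):
    """trace(FIM^-1) for unit bearing vectors at the given angles."""
    a = b = c = 0.0
    for t in angles:
        ux, uy = math.cos(t), math.sin(t)
        a += ux * ux; b += ux * uy; c += uy * uy
    det = a * c - b * b
    return (a + c) / det  # trace of the inverse of [[a, b], [b, c]]

def adapt(angles, step=0.01, iters=300, eps=1e-5):
    """Coordinate-wise numerical gradient descent on the error measure."""
    angles = list(angles)
    for _ in range(iters):
        for i in range(len(angles)):
            bumped = angles[:]
            bumped[i] += eps
            g = (error_measure(bumped) - error_measure(angles)) / eps
            angles[i] -= step * g
    return angles

# A poor formation (bearings bunched together) spreads out to reduce the
# localization error; for three sensors the optimum is an isotropic spread.
start = [0.0, 0.3, 0.6]
print(error_measure(start), error_measure(adapt(start)))
```

    The key point mirrored from the abstract is that the performance measure is a function of geometry alone, so the team can improve its localization simply by moving, without changing its sensors.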

    Cooperative Material Handling by Human and Robotic Agents: Module Development and System Synthesis

    In this paper we present the results of a collaborative effort to design and implement a system for cooperative material handling by a small team of human and robotic agents in an unstructured indoor environment. Our approach makes fundamental use of human agents' expertise for aspects of task planning, task monitoring, and error recovery. Our system is neither fully autonomous nor fully teleoperated. It is designed to make effective use of human abilities within the present state of the art of autonomous systems. It is designed to allow for and promote cooperative interaction between distributed agents with various capabilities and resources. Our robotic agents refer to systems which are each equipped with at least one sensing modality and which possess some capability for self-orientation and/or mobility. Our robotic agents are not required to be homogeneous with respect to either capabilities or function. Our research stresses both paradigms and testbed experimentation. Theoretical issues include the requisite coordination principles and techniques which are fundamental to the basic functioning of such a cooperative multi-agent system. We have constructed a testbed facility for experimenting with distributed multi-agent architectures. The required modular components of this testbed are currently operational and have been tested individually. Our current research focuses on the integration of agents in a scenario for cooperative material handling.

    The Human Sweet Tooth

    Humans love the taste of sugar and the word "sweet" is used to describe not only this basic taste quality but also something that is desirable or pleasurable, e.g., la dolce vita. Although sugar or sweetened foods are generally among the most preferred choices, not everyone likes sugar, especially at high concentrations. The focus of my group's research is to understand why some people have a sweet tooth and others do not. We have used genetic and molecular techniques in humans, rats, mice, cats and primates to understand the origins of sweet taste perception. Our studies demonstrate that there are two sweet receptor genes (TAS1R2 and TAS1R3), and alleles of one of the two genes predict the avidity with which some mammals drink sweet solutions. We also find a relationship between sweet and bitter perception. Children who are genetically more sensitive to bitter compounds report that very sweet solutions are more pleasant and they prefer sweet carbonated beverages more than milk, relative to less bitter-sensitive peers. Overall, people differ in their ability to perceive the basic tastes, and particular constellations of genes and experience may drive some people, but not others, toward a caries-inducing sweet diet. Future studies will be designed to understand how a genetic preference for sweet food and drink might contribute to the development of dental caries